Supplementary Material of " Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks "

Neural Information Processing Systems

We provide a detailed proof of Theorem 1, together with an alternative proof of the identifiability of the Poisson BN. In the marginal-likelihood derivation, the last equality holds because the integrand is the kernel of a beta distribution. The scRNA-seq experiments were performed on five mice with AhR knockout targeted to intestinal stem cells; on average, each mouse contributed 6,000 cells.
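The beta-kernel step mentioned above presumably relies on the standard normalizing identity for the beta distribution (a generic sketch of the integration step, not the paper's exact derivation):

```latex
\int_0^1 \pi^{a-1}(1-\pi)^{b-1}\,d\pi
  = B(a,b)
  = \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)},
\qquad a, b > 0.
```

Recognizing the integrand as an unnormalized $\mathrm{Beta}(a,b)$ density lets the integral be evaluated in closed form, which is what makes the marginal likelihood tractable.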



Review for NeurIPS paper: Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks

Neural Information Processing Systems

Weaknesses: The paper emphasizes its focus on causal structure learning. In doing so, it assumes "causal sufficiency", that is, it assumes that there are no latent confounders of the measured variables. In most domains, however, there are many latent confounders of the measured variables. Over the past 20 years, there has been substantial progress in developing graphical representations and algorithms for learning equivalence classes of causal networks from observational data. When causal sufficiency is assumed, learning the DAG structure is generally called Bayesian network structure learning, not causal structural learning, as in the title of the paper. It would be helpful for the paper to highlight this assumption more prominently.


Review for NeurIPS paper: Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks

Neural Information Processing Systems

All of the reviewers agree that this paper is a solid contribution to NeurIPS, both theoretically and in its modeling. My only concern is that some of the author rebuttal points have not made it into the paper; I think all of them should be added, in particular the extended related work, the causal-sufficiency clarification, and the run times.


Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks

Neural Information Processing Systems

Multivariate zero-inflated count data arise in a wide range of areas such as economics, the social sciences, and biology. To infer causal relationships from zero-inflated count data, we propose a new zero-inflated Poisson Bayesian network (ZIPBN) model. We show that the proposed ZIPBN is identifiable from cross-sectional data. The proof is based on the well-known characterization of the Markov equivalence class, and it is applicable to other distribution families. For causal structural learning, we introduce a fully Bayesian inference approach that exploits a parallel tempering Markov chain Monte Carlo algorithm to efficiently explore the multi-modal network space.
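To make the generative model concrete, the sketch below draws samples from an illustrative zero-inflated Poisson Bayesian network: each node is an excess zero with a probability given by a logistic link on its parents, and otherwise a Poisson draw whose log-rate is linear in its parents. The link functions, parameter names (`alpha`, `beta`, `delta`, `gamma`), and toy values are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def sample_zipbn(n, order, parents, alpha, beta, delta, gamma, rng=None):
    """Draw n samples from an illustrative zero-inflated Poisson BN.

    For node j with parent values x_pa:
      pi_j  = sigmoid(delta[j] + alpha[j] @ x_pa)   # excess-zero probability
      lam_j = exp(gamma[j] + beta[j] @ x_pa)        # Poisson rate
      X_j   = 0 with prob pi_j, else Poisson(lam_j)
    """
    rng = np.random.default_rng(rng)
    p = len(order)
    X = np.zeros((n, p), dtype=int)
    for j in order:  # visit nodes in a topological order of the DAG
        pa = parents[j]
        eta_pi = delta[j] + X[:, pa] @ np.asarray(alpha[j])
        eta_lam = gamma[j] + X[:, pa] @ np.asarray(beta[j])
        pi = 1.0 / (1.0 + np.exp(-eta_pi))
        lam = np.exp(np.clip(eta_lam, -20, 20))  # guard against overflow
        excess_zero = rng.random(n) < pi
        X[:, j] = np.where(excess_zero, 0, rng.poisson(lam))
    return X

# Toy two-node chain X0 -> X1 with hypothetical parameter values.
X = sample_zipbn(
    n=1000, order=[0, 1], parents={0: [], 1: [0]},
    alpha={0: [], 1: [0.5]}, beta={0: [], 1: [-0.3]},
    delta={0: -2.0, 1: -1.0}, gamma={0: 1.0, 1: 0.8},
    rng=0,
)
```

The zero-inflation component is what distinguishes this model from a plain Poisson BN: a sample of 0 can come either from the excess-zero mechanism or from the Poisson draw itself, which is exactly the ambiguity that makes identifiability a nontrivial question.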